-
Characterizing the Efficiency of Static Pricing Schemes as a Function of the Supply
The problem of selling a supply of k units to a stream of customers constitutes one of the cornerstones of revenue management. Static pricing schemes (which post the same price to all customers) are commonly used because of their simplicity and their many desirable properties: they are anonymous, nonadaptive, and order oblivious. Although the efficiency of such schemes should improve as the supply k increases, prior work has focused either on algorithms that aim for a constant approximation independent of k or on the setting where k becomes very large. In contrast, this paper characterizes the efficiency of static pricing schemes as a function of the supply. Our approach stems from identifying a “sweet spot” between selling enough items and obtaining enough utility from customers with high valuations. Subsequent work shows that our pricing scheme is the optimal static pricing for every value of k.
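As a rough illustration of what a static (posted-price) scheme does, and not code from the paper, the sketch below offers the same price to every arriving customer until the k units run out. The uniform valuations, the particular price, and the name simulate_static_pricing are all hypothetical choices made for this example.

```python
import random

def simulate_static_pricing(price, supply_k, valuations):
    """Post one price to every customer in arrival order; each customer
    buys one unit iff their valuation meets the price, until supply runs out."""
    sold, revenue, welfare = 0, 0.0, 0.0
    for v in valuations:
        if sold >= supply_k:
            break                 # supply exhausted: remaining customers are turned away
        if v >= price:            # customer accepts the posted price
            sold += 1
            revenue += price
            welfare += v          # value captured by serving this customer
    return sold, revenue, welfare

# Hypothetical example: 100 customers with i.i.d. U[0,1] valuations, k = 10 units.
random.seed(0)
vals = [random.random() for _ in range(100)]
print(simulate_static_pricing(price=0.8, supply_k=10, valuations=vals))
```

Because the price is fixed in advance and identical for everyone, the scheme is anonymous, nonadaptive, and order oblivious exactly as the abstract describes; the trade-off the paper studies is between setting the price low enough to sell most of the k units and high enough to capture utility from high-valuation customers.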
-
DNN training is extremely time-consuming, necessitating efficient multi-accelerator parallelization. Current approaches to parallelizing training primarily use intra-batch parallelization, where a single iteration of training is split over the available workers, but suffer from diminishing returns at higher worker counts. We present PipeDream, a system that adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. Unlike traditional pipelining, DNN training is bi-directional, where a forward pass through the computation graph is followed by a backward pass that uses state and intermediate data computed during the forward pass. Naïve pipelining can thus result in mismatches in state versions used in the forward and backward passes, or excessive pipeline flushes and lower hardware efficiency. To address these challenges, PipeDream versions model parameters for numerically correct gradient computations, and schedules forward and backward passes of different minibatches concurrently on different workers with minimal pipeline stalls. PipeDream also automatically partitions DNN layers among workers to balance work and minimize communication. Extensive experimentation with a range of DNN tasks, models, and hardware configurations shows that PipeDream trains models to high accuracy up to 5.3X faster than commonly used intra-batch parallelism techniques.
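The following is a minimal toy sketch, not PipeDream's actual implementation, of the parameter-versioning idea the abstract describes: a stage stashes the parameter version used in each minibatch's forward pass and reuses it in the matching backward pass, while forward and backward passes of different minibatches interleave. The StageWorker class and the simple alternating schedule are illustrative assumptions.

```python
from collections import deque

class StageWorker:
    """Toy model of one pipeline stage: stash the parameter version seen in a
    minibatch's forward pass so its backward pass uses the same weights, even
    if other minibatches have updated the parameters in between."""

    def __init__(self):
        self.version = 0   # current parameter version on this stage
        self.stash = {}    # minibatch id -> version used in its forward pass

    def forward(self, mb):
        self.stash[mb] = self.version
        return f"fwd(mb={mb}, weights v{self.version})"

    def backward(self, mb):
        used = self.stash.pop(mb)   # reuse the stashed version, not the latest
        self.version += 1           # apply this minibatch's update afterwards
        return f"bwd(mb={mb}, weights v{used})"

# Interleave forward and backward passes of different minibatches instead of
# flushing the pipeline after every minibatch (a 1F1B-like toy schedule).
worker, in_flight = StageWorker(), deque()
for mb in range(4):
    print(worker.forward(mb))
    in_flight.append(mb)
    if mb >= 1:                     # once the pipeline has filled, alternate
        print(worker.backward(in_flight.popleft()))
while in_flight:
    print(worker.backward(in_flight.popleft()))
```

Printing the schedule shows that each backward pass reports the same weight version as the corresponding forward pass, which is the consistency property that naïve pipelining loses.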
